Rand’s Whiteboard Friday from a few weeks ago got me quite worked up. He states that, on the basis of some black hat friends’ experiments, user behaviour, especially click-through rate (CTR), is not important to Google. First off, I’ll concede that I don’t think Google uses CTR, Bounce Rate or Conversion Rate as measurements. I’m pretty sure that Google uses some other numbers that it can collect, and that we, as webmasters, can’t possibly collect – plus, probably, time on page, which both Google and webmasters can measure.
Before digging deeper, let’s take a step back, first. What’s Google’s goal? Google has a corporate mission statement, the “Ten Things”. What’s number one? “Focus on the user”.
Let’s have a little thought experiment. Assume that we have massive, high authority and trusted backlinks and all the technical on-page optimisations we know should give us page one, possibly even position one ranking. The page appears high in the results, but the description is dull and the page looks as though it has been put together by a small child.
What will users do when faced with a high-ranking page that just doesn’t appeal? My prediction is that users will click less on the result than they otherwise would. And those few that do click, and see the page, will mostly return to search again and go to a different result that might answer the question. As webmasters, the only indications we will see are that we get fewer clicks than we might expect from keyword research, and that we get a high bounce rate, low time on page, and a low conversion rate. Google will see a disappointed searcher, who returns to search and clicks again.
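To make that asymmetry concrete, here is a minimal sketch, in Python, of the kind of “pogo-sticking” signal a searcher’s quick return to the results page would give Google. Everything here is hypothetical illustration: the `Click` record, the ten-second threshold and the function name are my inventions, not anything Google has published.

```python
from dataclasses import dataclass

@dataclass
class Click:
    """One searcher click on a result, with how long they stayed away from the SERP."""
    result_url: str
    dwell_seconds: float  # time before the searcher returned to search

def pogo_stick_rate(clicks, short_dwell=10.0):
    """Fraction of clicks where the searcher bounced straight back to the results.

    A high rate suggests the page disappointed searchers; the webmaster only
    ever sees this indirectly, as bounce rate and low time on page.
    """
    if not clicks:
        return 0.0
    bounced = sum(1 for c in clicks if c.dwell_seconds < short_dwell)
    return bounced / len(clicks)

sessions = [Click("example.com/page", 4.0),
            Click("example.com/page", 7.5),
            Click("example.com/page", 180.0)]
print(pogo_stick_rate(sessions))  # 2 of 3 clicks bounced back quickly
```

The point of the sketch is only that this signal is computed entirely from data Google holds (the search session), which is why we webmasters can’t observe it directly.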
We can’t see what Google sees. What Google has seen is that our page underperforms. We’re disappointing searchers with that poorly designed page. In fact, Google can probably do better than just seeing that we’ve disappointed users. Google has a lot of datacenters. What if some of the datacenters show our newly ranking page higher, and some lower, than we’d expect? Google can work out across the datacenters whether our abysmal performance is a fluke – for example, a competitor may have a matching message just above our position, which might depress our CTR. So by checking between orderings on different datacenters, Google can get an idea of whether users think our page is unattractive, or at least less attractive than it should be to justify a high ranking on the first page.
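A cross-datacenter comparison like that could be as simple as a two-proportion z-test on the CTRs the two orderings produce. This is a sketch of the statistics only, not of anything Google has disclosed; the counts and the 1.96 significance cut-off are illustrative assumptions.

```python
from math import sqrt

def ctr_gap_z(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-statistic for the CTR difference between two orderings.

    A small |z| suggests the low CTR is consistent across datacenters (the page
    really is unattractive); a large |z| suggests a fluke local to one ordering,
    e.g. a competitor's matching message just above our position there.
    """
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)              # pooled CTR
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))    # standard error
    return (p_a - p_b) / se

# Hypothetical counts: two datacenters showing the page in different orderings
z = ctr_gap_z(clicks_a=40, imps_a=1000, clicks_b=38, imps_b=1000)
print(abs(z) < 1.96)  # no significant gap: the poor CTR is consistent, not a fluke
```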
What should Google do with a poor-performing page?
Keep it there, despite the searchers’ lack of interest? That doesn’t seem like a strategy to deliver the world’s best search engine. What I’ve observed is that a page with a poor CTR, poor bounce rate, poor time on page and poor conversion is less likely to stay high in the search results. In fact, I’ll bet you all already know this, because there’s a common tool we webmasters use that shows exactly this behaviour, consistently and persistently.
It’s the blog.
You, or your copywriters, generate articles. Some stick and some fail. Why? The answer, I think, is user behaviour. I spotted the signs of user-mediated rank modulation some years ago, and I’ve been experimenting with it on my blog for the past few years. I have, for example, an article from 2006 that persistently brings in a handful of users every day. Another from 2007, a few from 2008, and so on. But the other articles immediately before and afterwards, despite keyword-rich titles and on-page mentions, don’t deliver a sustained stream of readers. The backlinks to the blog are similar. Some of the newsy articles that don’t appear in results even have more backlinks than the articles that are sustained.
I expect you’ve seen the same. Sometimes one article catches on, and another similar article fails the test of time. What separates them? I believe that part of the answer is backlinks, but the major factor that sustains them in search results is that Google monitors user reactions. If the search results are better for users with the page present, then it’ll be kept. If users find the content boring, out of date and unhelpful, then the page will drop out of the rankings.
The largest impact of user behaviour is that we get to keep our position, or maybe even slide up a position or two.
Rand’s contention is that because user behaviour can be spoofed, Google should not pay attention to user behaviour. I’ll gladly concede that user behaviour can be spoofed, but my defence is that if user behaviour could be successfully spoofed, then AdWords, and especially the Content Network (AdSense), would be so stuffed with fraudulent clicks that it would be unusable.
I used to be an AdWords Help Forum Top Contributor. I’ve written thousands of responses to AdWords advertisers, many about click fraud. I’ve investigated click fraud, even using artificial intelligence programs to help. I’m quite certain that Google controls the levels of click fraud by monitoring user behaviour. I want to make sure that point is well understood:
Google already monitors and assesses user behaviour. Paid search, in the volume that Google does it, only works because Google monitors and assesses user behaviour.
How can we tell?
I expect that some of you use search engine ranking tools: those robots that make queries to Google, despite the terms and conditions that state you shouldn’t. What does that do to Google’s understanding of search results? Those robot queries will look like an extra search. Another impression. And an impression without a click. The CTR will be diluted by robots testing rankings. If you run paid search as well, then you can see whether your CTR is affected by rank checking. Try it. Use a rarely searched phrase, and run your tool on that. Monitor what happens in AdWords. Most of the time you’ll see that either Google doesn’t register an impression (it has been dismissed as invalid before it reaches the AdWords UI) or it reaches the UI and is dismissed within the next few hours (as Google’s processing decides that the impression was not a valid user stream). Some may creep past the defences. It certainly isn’t perfect, but they are doing it and it works well enough. That’s all that’s needed. Working well enough will do.
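The dilution effect is easy to see with a toy calculation. A sketch, assuming we could label each impression as human or robot, which only Google can actually do:

```python
def observed_vs_filtered_ctr(events):
    """events: list of (clicked, is_bot) impression records.

    Rank-checking robots add impressions without clicks, diluting the raw CTR.
    Filtering suspected-invalid impressions, as AdWords appears to do before or
    shortly after they reach the UI, restores the human CTR.
    """
    clicks = sum(1 for clicked, _ in events if clicked)
    human = [clicked for clicked, bot in events if not bot]
    raw_ctr = clicks / len(events) if events else 0.0
    human_ctr = (sum(human) / len(human)) if human else 0.0
    return raw_ctr, human_ctr

# 5 human impressions with 2 clicks, plus 15 robot rank checks that never click
events = [(True, False), (True, False)] + [(False, False)] * 3 + [(False, True)] * 15
raw, human = observed_vs_filtered_ctr(events)
print(raw, human)  # raw CTR is a fifth of the human CTR
```

The robot traffic turns a healthy 40% human CTR into an apparent 10%, which is exactly why Google has an incentive to discard those impressions before they feed any metric.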
In fact, AdWords can give you a measured conversion for a keyword that has no impressions and no clicks, because Google’s user behaviour assessment has decided that the impression and click are invalid; you can also see clicks with no impressions, implying the software has at least two ways to determine validity.
If Google wasn’t good enough at measuring and assessing user behaviour, then advertisers would mostly be paying fraudsters to steal from them. I have no doubt that there is a level of fraud. I have no doubt that Google misidentifies some user behaviour. But they identify *enough* user behaviour to make AdWords work, well enough.
And that’s the final point I want to make – Google isn’t perfect. User data, including backlinks, isn’t perfect (what, you’ve never seen an undeclared paid backlink, ever?). The search results are not a clean and simple two- or three-factor result of data collection, but an evolving combination of corroborating data. High backlinks, but boring – you get dropped. Lower backlinks, but an interesting article for users – you may get to stay, or even drift up in the results by a place or two.
Summary
Google already measures and assesses user behaviour, using data that webmasters don’t have. The checking is sophisticated enough that click fraud can be managed to tolerable levels. Not that there is no click fraud, but there is little enough that the system is usable. Usable is good enough.
Webmasters do have CTR (implied from keyword research, or measured from paid search cross checking), bounce rate, time on page and conversion rates (from web analytics of various types). We can’t optimise using the data that only Google has, so we have to use what we can measure.
User behaviour, at a minimum, determines whether we stay as high in the results as our backlinks and on-page optimisation would otherwise earn. At a maximum, user behaviour appears to modulate position upwards by only a few places, but most importantly, it stops us from falling off the results page altogether.
Google’s measurements of user behaviour are pretty sophisticated, if they can detect robots doing rank checking and users clicking fraudulently. The measurements aren’t perfect, but they are better than failing to do the assessment at all. They are good enough to make sure that ROI is achievable on AdWords, even on the Content Network, despite the lower inherent CTR and the variability of presentation on that network (intrinsic problems, as well as the added complication of fraudulent site owners).
Dismissing user behaviour is, I think, an egregious error. The whole point, for Google, of search engine results, is to deliver a page that works for search users – not webmasters, not link builders, but searchers.
Clicking on our results, getting friends to click, and even paying third parties to click, is no more likely to succeed for SEO than it is for AdSense. If you can pwn AdSense with fraudulent clicking, then you can pwn SEO user-behaviour clicking… but why would you bother, given that you’re already so wealthy from AdSense? I don’t believe that “simple” clicking systems will help, and they certainly won’t help if you don’t understand why Google looks at user behaviour. I don’t think Rand’s article shows sufficient comprehension of why Google wants to look at user behaviour, and that’s why the black hatters have been failing. It’s primarily not about position, but retention.
I remain sufficiently confused by the multiplicity of factors that affect search ranking that I’m willing to listen to contrary data… In the interim, I hope that you’ll agree that Google should be using user behaviour, and that they have the technology to assess user behaviour, honed in the advertising network, where the cost of fraudulent clicks is an immediate concern. Thanks for your attention.